
SoK: On the Semantic AI Security in Autonomous Driving

Shen, Junjie, Wang, Ningfei, Wan, Ziwen, Luo, Yunpeng, Sato, Takami, Hu, Zhisheng, Zhang, Xinyang, Guo, Shengjian, Zhong, Zhenyu, Li, Kang, Zhao, Ziming, Qiao, Chunming, Chen, Qi Alfred

arXiv.org Artificial Intelligence

Autonomous Driving (AD) systems rely on AI components to make safe and correct driving decisions. Unfortunately, today's AI algorithms are known to be generally vulnerable to adversarial attacks. However, for such AI component-level vulnerabilities to be semantically impactful at the system level, attacks must bridge non-trivial semantic gaps both (1) from the system-level attack input spaces to those at the AI component level, and (2) from the AI component-level attack impacts to those at the system level. In this paper, we define this research space as semantic AI security, as opposed to generic AI security. Over the past 5 years, a growing number of research works have tackled such semantic AI security challenges in the AD context, and this body of work has started to show an exponential growth trend. In this paper, we perform the first systematization of knowledge of this growing semantic AD AI security research space. In total, we collect and analyze 53 such papers and systematically taxonomize them based on research aspects critical for the security field. We summarize the 6 most substantial scientific gaps observed based on quantitative comparisons, both vertically among existing AD AI security works and horizontally with security works from closely related domains. With these, we are able to provide insights and potential future directions not only at the design level, but also at the research goal, methodology, and community levels. To address the most critical scientific methodology-level gap, we take the initiative to develop an open-source, uniform, and extensible system-driven evaluation platform, named PASS, for the semantic AD AI security research community. We also use our implemented platform prototype to showcase the capabilities and benefits of such a platform using representative semantic AD AI attacks.


Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society

Prunkl, Carina, Whittlestone, Jess

arXiv.org Artificial Intelligence

One way of carving up the broad "AI ethics and society" research space that has emerged in recent years is to distinguish between "near-term" and "long-term" research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable the identification of previously neglected research areas.


Baidu Expands U.S. Research Space With New Silicon Valley Site

#artificialintelligence

Baidu Inc. plans to double its footprint in Silicon Valley with a second research and development facility, seeking to gain an edge in artificial intelligence technology. China's largest search engine provider will add capacity for 150 employees, the company said Friday in a statement. Baidu has about 200 people at its existing site in Sunnyvale, California. The two offices will be a mile apart. "It is becoming increasingly important in Baidu's global strategy as a base for attracting world-class talent," the company said in the statement.